Differentially Oblivious Turing Machines
Oblivious RAM (ORAM) is a machinery that protects any RAM program from leaking information about its secret input to an adversary that observes only its access pattern. It is known that every ORAM must incur a logarithmic overhead compared to the non-oblivious RAM. In fact, even the seemingly weaker notion of differential obliviousness, which intuitively "protects" a single access by guaranteeing that the observed access patterns for every two "neighboring" logical access sequences satisfy (ε,δ)-differential privacy, is subject to a logarithmic lower bound.
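For reference, the differential-privacy condition on access patterns mentioned above can be written as follows (a schematic rendering with my own notation, not a definition copied from the paper):

```latex
% (\epsilon,\delta)-differential obliviousness: for every two neighboring logical
% access sequences q, q' and every set S of possible observable access patterns,
\Pr\bigl[\mathsf{AccessPattern}(M,q) \in S\bigr]
  \;\le\; e^{\epsilon}\cdot \Pr\bigl[\mathsf{AccessPattern}(M,q') \in S\bigr] \;+\; \delta .
```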
In this work, we show that any Turing machine computation can be generically compiled into a differentially oblivious one with only doubly logarithmic overhead. More precisely, given a Turing machine that makes N transitions, the compiled Turing machine makes O(N · log log N) transitions in total and the sequence of physical head movements satisfies (ε,δ)-differential privacy (for a constant ε and a negligible δ). We additionally show that Ω(log log N) overhead is necessary in a natural range of parameters (and in the balls and bins model).
As a corollary, we show that there exist natural data structures such as stacks and queues (supporting online operations) on N elements for which there is a differentially oblivious implementation on a Turing machine incurring amortized O(log log N) overhead per operation, while it is known that any oblivious implementation must consume Ω(log N) operations unconditionally, even on a RAM. Therefore, we obtain the first unconditional separation between obliviousness and differential obliviousness in the most natural setting of parameters where ε is a constant and δ is negligible. Before this work, such a separation was only known in the balls and bins model. Note that the lower bound applies in the RAM model while our upper bound is in the Turing machine model, making our separation stronger.
Negation-Limited Formulas
We give an efficient structural decomposition theorem for formulas that depends on their negation complexity and demonstrate its power with the following applications.
We prove that every formula that contains t negation gates can be shrunk using a random restriction to a formula of size O(t) with the shrinkage exponent of monotone formulas. As a result, the shrinkage exponent of formulas that contain a constant number of negation gates is equal to the shrinkage exponent of monotone formulas.
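As a reminder (standard definitions, not text from the paper): under a random p-restriction each variable is kept free with probability p and otherwise fixed uniformly at random, and a class of formulas has shrinkage exponent Γ if

```latex
\mathbb{E}_{\rho \sim \mathcal{R}_p}\bigl[\, L(f|_{\rho}) \,\bigr] \;=\; O\!\bigl( p^{\Gamma}\, L(f) \,+\, 1 \bigr),
```

where L(f) denotes the formula (leaf) size of f.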
We give an efficient transformation of formulas with t negation gates to circuits with log(t) negation gates. This transformation provides a generic way to cast results for negation-limited circuits to the setting of negation-limited formulas. For example, using a result of Rossman (CCC '15), we obtain an average-case lower bound for formulas of polynomial size on n variables with n^{1/2-ε} negations.
In addition, we prove a lower bound on the number of negations required to compute one-way permutations by polynomial-size formulas.
Communication with Contextual Uncertainty
We introduce a simple model illustrating the role of context in communication and the challenge posed by uncertainty of knowledge of context. We consider a variant of distributional communication complexity where Alice gets some information x and Bob gets y, where (x, y) is drawn from a known distribution, and Bob wishes to compute some function g(x, y) (with high probability over the choice of (x, y)). In our variant, Alice does not know g, but only knows some function f which is an approximation of g. Thus, the function g being computed forms the context for the communication, and knowing it imperfectly models (mild) uncertainty in this context.
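Schematically, under the reconstruction above (variable names are mine, recovered from context), the uncertain setting is:

```latex
% Inputs: (x,y) \sim \mu, with Alice holding x and Bob holding y.
% Context: Bob wants g(x,y); Alice knows only some f with
%          \Pr_{(x,y)\sim\mu}\bigl[\, f(x,y) \neq g(x,y) \,\bigr] \;\le\; \delta,
%          i.e. f is a pointwise approximation of the true context g.
% Goal: Bob outputs g(x,y) with high probability over (x,y) \sim \mu,
%       even though Alice and Bob do not share the exact context g.
```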
A naive solution would be for Alice and Bob to first agree on some common function h that is close to both f and g and then use a protocol for h to compute h(x, y). We show that any such agreement leads to a large overhead in communication, ruling out such a universal solution.
In contrast, we show that if g has a one-way communication protocol with complexity k in the standard setting, then it has a communication protocol with complexity O(k · (1 + I)) in the uncertain setting, where I denotes the mutual information between x and y. In the particular case where the input distribution is a product distribution, the protocol in the uncertain setting only incurs a constant factor blow-up in communication and error.
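For reference, I here is the standard mutual information of the joint input distribution (this definition is standard, not specific to the paper):

```latex
I(X;Y) \;=\; H(X) - H(X \mid Y) \;=\; \sum_{x,y} \mu(x,y)\,\log \frac{\mu(x,y)}{\mu(x)\,\mu(y)} ,
```

which equals 0 precisely when μ is a product distribution, consistent with the constant-factor blow-up stated above for the product case.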
Furthermore, we show that the dependence on the mutual information I is required. Namely, we construct a class of functions along with a non-product distribution over (x, y) for which the communication complexity is a single bit in the standard setting but at least Ω(√n) bits in the uncertain setting.
Leakage Resilient One-Way Functions: The Auxiliary-Input Setting
Most cryptographic schemes are designed in a model where perfect secrecy of the secret key is assumed. In most physical implementations, however, some form of information leakage is inherent and unavoidable. To deal with this, a flurry of works showed how to construct basic cryptographic primitives that are resilient to various forms of leakage.
Dodis et al. (FOCS '10) formalized and constructed leakage resilient one-way functions. These are one-way functions f such that, given a random image f(x) and leakage g(x), it is still hard to invert f(x). Based on any one-way function, Dodis et al. constructed such a one-way function that is leakage resilient assuming that an attacker can leak any lossy function g of the input.
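A hedged rendering of this notion (my formulation of the standard definition; the precise quantifiers depend on whether the leakage is selective or adaptive, as discussed next): for every admissible leakage function g and every efficient inverter A,

```latex
\Pr_{x \leftarrow \{0,1\}^n}\Bigl[\, A\bigl(f(x),\, g(x)\bigr) \in f^{-1}\bigl(f(x)\bigr) \,\Bigr] \;\le\; \mathrm{negl}(n).
```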
In this work we consider the problem of constructing leakage resilient one-way functions that are secure with respect to arbitrary computationally hiding leakage (a.k.a. auxiliary input). We consider both types of leakage --- selective and adaptive --- and prove various possibility and impossibility results.
On the negative side, we show that if the leakage is an adaptively-chosen arbitrary one-way function, then it is impossible to construct leakage resilient one-way functions. The latter is proved both in the random oracle model (without any further assumptions) and in the standard model based on a strong vector-variant of DDH. On the positive side, we observe that when the leakage is chosen ahead of time, there are leakage resilient one-way functions based on a variety of assumptions.
A Lower Bound for Adaptively-Secure Collective Coin-Flipping Protocols
In 1985, Ben-Or and Linial (Advances in Computing Research '89) introduced the collective coin-flipping problem, where n parties communicate via a single broadcast channel and wish to generate a common random bit in the presence of adaptive Byzantine corruptions. In this model, the adversary can decide to corrupt a party in the course of the protocol as a function of the messages seen so far. They showed that the majority protocol, in which each player sends a random bit and the output is the majority value, tolerates O(√n) adaptive corruptions. They conjectured that this is optimal for such adversaries.
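The majority protocol is simple enough to sketch. The toy simulation below (my own illustration, not code from the paper) shows why roughly √n adaptive corruptions suffice to bias it: the honest votes deviate from a tie by about √n, so an adversary that watches the broadcast bits and overwrites the votes of about that many parties can usually force its preferred outcome.

```python
import math
import random

def majority_coin_flip(n: int, corrupt_budget: int, target_bit: int = 1) -> int:
    """One round of the majority protocol: every party broadcasts a uniform bit
    and the output is the majority value.  The toy adversary observes the honest
    bits and then corrupts up to corrupt_budget parties that voted against its
    target, replacing their bits (an idealized adaptive/rushing attacker)."""
    bits = [random.randint(0, 1) for _ in range(n)]
    flipped = 0
    for i in range(n):
        if flipped == corrupt_budget:
            break
        if bits[i] != target_bit:
            bits[i] = target_bit
            flipped += 1
    return int(sum(bits) > n // 2)

# The honest sum deviates from n/2 by about sqrt(n)/2, so a budget of a few
# times sqrt(n) usually suffices to flip the outcome, while o(sqrt(n)) does not.
n = 1_001
budget = int(3 * math.sqrt(n))
runs = [majority_coin_flip(n, budget) for _ in range(2_000)]
print(sum(runs) / len(runs))  # close to 1, i.e. heavily biased toward target_bit
```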
We prove that the majority protocol is optimal (up to a poly-logarithmic factor) among all protocols in which each party sends a single, possibly long, message.
Previously, such a lower bound was known for protocols in which parties are allowed to send only a single bit (Lichtenstein, Linial, and Saks, Combinatorica '89), or for symmetric protocols (Goldwasser, Kalai, and Park, ICALP '15).
From Minicrypt to Obfustopia via Private-Key Functional Encryption
Private-key functional encryption enables fine-grained access to symmetrically-encrypted data. Although private-key functional encryption (supporting an unbounded number of keys and ciphertexts) seems significantly weaker than its public-key variant, its known realizations all rely on public-key functional encryption. At the same time, however, up until recently it was not known to imply any public-key primitive, demonstrating our poor understanding of this extremely useful primitive.
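For orientation, the interface the abstract has in mind is the standard (private-key) functional-encryption syntax (a schematic reminder, not text from the paper):

```latex
% Private-key functional encryption, schematically:
%   \mathsf{Setup}(1^{\lambda}) \to \mathsf{msk}
%   \mathsf{Enc}(\mathsf{msk}, x) \to \mathsf{ct}_x
%   \mathsf{KeyGen}(\mathsf{msk}, f) \to \mathsf{sk}_f
%   \mathsf{Dec}(\mathsf{sk}_f, \mathsf{ct}_x) = f(x)
% Security: collections of keys and ciphertexts reveal (computationally)
% nothing about the encrypted inputs beyond the values f(x).
```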
Recently, Bitansky et al. [TCC '16B] showed that sub-exponentially-secure private-key functional encryption bridges from nearly-exponential security in Minicrypt to slightly super-polynomial security in Cryptomania, and from sub-exponential security in Cryptomania to Obfustopia. Specifically, given any sub-exponentially-secure private-key functional encryption scheme and a nearly-exponentially-secure one-way function, they constructed a public-key encryption scheme with slightly super-polynomial security. Assuming, in addition, a sub-exponentially-secure public-key encryption scheme, they then constructed an indistinguishability obfuscator.
We settle the problem of positioning private-key functional encryption within the hierarchy of cryptographic primitives by placing it in Obfustopia. First, given any quasi-polynomially-secure private-key functional encryption scheme, we construct an indistinguishability obfuscator for circuits with inputs of poly-logarithmic length. Then, we observe that such an obfuscator can be used to instantiate many natural applications of indistinguishability obfuscation. Specifically, relying on sub-exponentially-secure one-way functions, we show that quasi-polynomially-secure private-key functional encryption implies not just public-key encryption but leads all the way to public-key functional encryption for circuits with inputs of poly-logarithmic length. Moreover, relying on sub-exponentially-secure injective one-way functions, we show that quasi-polynomially-secure private-key functional encryption implies a hard-on-average distribution over instances of a PPAD-complete problem.
Underlying our constructions is a new transformation from single-input functional encryption to multi-input functional encryption in the private-key setting. The previously known such transformation [Brakerski et al., EUROCRYPT '16] required a sub-exponentially-secure single-input scheme, and obtained a scheme supporting only a slightly super-constant number of inputs. Our transformation both relaxes the underlying assumption and supports more inputs: given any quasi-polynomially-secure single-input scheme, we obtain a scheme supporting a poly-logarithmic number of inputs.
New Bounds on the Local Leakage Resilience of Shamir's Secret Sharing Scheme
We study the local leakage resilience of Shamir's secret sharing scheme. In Shamir's scheme, a random polynomial f of degree t-1 is sampled over a field of size p, conditioned on f(0) = s for a secret s. Any t shares (i, f(i)) can be used to fully recover f and thereby s, but any t-1 evaluations of f at non-zero coordinates are completely independent of s. Recent works ask whether the secret remains hidden even if, say, only 1 bit of information is leaked from each share, independently. This question is well motivated due to the wide range of applications of Shamir's scheme. For instance, it is known that if Shamir's scheme is leakage resilient in some range of parameters, then known secure computation protocols are secure in a local leakage model.
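To make the scheme itself concrete, here is a minimal sketch of Shamir sharing and reconstruction over a prime-order field (my own illustration, using the degree-(t-1) convention above; it says nothing about the leakage analysis).

```python
import random

def share(secret: int, t: int, n: int, p: int) -> list[tuple[int, int]]:
    """Sample a random polynomial f of degree t-1 over GF(p) with f(0) = secret
    and hand party i the share (i, f(i))."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]], p: int) -> int:
    """Lagrange-interpolate f at 0 from t shares; any t-1 shares at non-zero
    points are distributed independently of the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

p = 2**61 - 1                            # a prime, i.e. a prime-order field as above
shares = share(secret=42, t=3, n=5, p=p)
assert reconstruct(shares[:3], p) == 42  # any 3 shares recover the secret
```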
Over characteristic-2 fields, the answer is known to be negative (e.g., Guruswami and Wootters, STOC '16). Benhamouda, Degwekar, Ishai, and Rabin (CRYPTO '18) were the first to give a positive answer assuming computation is done over prime-order fields. They showed that Shamir's scheme is leakage resilient whenever the threshold t is at least a certain (large) constant fraction of n. Since then, there have been extensive efforts to improve this threshold, and after a series of works, the current record establishes leakage resilience for a somewhat smaller constant fraction of n (Maji et al., ISIT '22). All existing analyses of Shamir's leakage resilience for general leakage functions follow a single framework, for which there is a known barrier for any t ≤ n/2.
In this work, we develop a new analytical framework that allows us to significantly improve upon the previous record and obtain additional new results. Specifically, we show:
Shamir's scheme is leakage resilient for any threshold t that is at least an explicit constant fraction of n, improving on the previous record.
If the leakage functions are guaranteed to be "balanced" (i.e., splitting the domain of possible shares into 2 roughly equal-size parts), then Shamir's scheme is leakage resilient for an even smaller constant threshold fraction.
If the leakage functions are guaranteed to be "unbalanced" (i.e., splitting the domain of possible shares into 2 parts of very different sizes), then Shamir's scheme is leakage resilient for thresholds well below the barrier mentioned above. Such a result is impossible to obtain using the previously known technique.
All of the above apply more generally to any secret sharing scheme based on MDS codes.
Confirming leakage resilience is most important in the range t ≤ n/2, as in many applications Shamir's scheme is used with thresholds below n/2. As opposed to the previous approach, ours does not seem to have a barrier at t = n/2, as demonstrated by our third contribution.
Scalable Agreement Protocols with Optimal Optimistic Efficiency
Designing efficient distributed protocols for various agreement tasks such as Byzantine Agreement, Broadcast, and Committee Election is a fundamental problem. We are interested in protocols for these tasks where each (honest) party communicates a number of bits which is sublinear in n, the number of parties. The first major step towards this goal is due to King et al. (SODA 2006), who showed a protocol where each party sends only polylog(n) bits throughout polylog(n) rounds, but which guarantees only that a 1 - o(1) fraction of the honest parties end up agreeing on a consistent output, assuming a constant fraction of static corruptions. A few years later, King et al. (ICDCN 2011) managed to get a full agreement protocol in the same model, but where each party sends Õ(√n) bits throughout polylog(n) rounds. Getting a full agreement protocol with polylog(n) communication per party has been a major challenge ever since.
In light of this barrier, we propose a new framework for designing efficient agreement protocols. Specifically, we design polylog(n)-round protocols for all of the above tasks (assuming a constant fraction of static corruptions) with the following optimistic and pessimistic guarantees:
Optimistic complexity: In an honest execution, (honest) parties send only polylog(n) bits.
Pessimistic complexity: In any other case, (honest) parties send Õ(√n) bits.
Thus, all an adversary can gain from deviating from the honest execution is that honest parties will need to work harder (i.e., transmit more bits) to reach agreement and terminate. Besides the above agreement tasks, we also use our new framework to get a scalable secure multiparty computation (MPC) protocol with analogous optimistic and pessimistic complexities.
Technically, we identify a relaxation of Byzantine Agreement (of independent interest) that allows all parties to fall back to a pessimistic execution in a coordinated way. We implement this relaxation with Õ(√n) communication bits per party and within polylog(n) rounds.